James-Stein estimation of the first principal component

Authors

Abstract

The Stein paradox has played an influential role in the field of high-dimensional statistics. This result warns that the sample mean, classically regarded as the usual estimator, may be suboptimal in high dimensions. The development of the James-Stein estimator, which addresses this paradox, has by now inspired a large literature on the theme of shrinkage. In this direction, we develop a James-Stein type estimator for the first principal component of a high dimension, low sample size data set. The estimator shrinks the leading eigenvector of the sample covariance matrix under a spiked covariance model and yields superior asymptotic guarantees. Our derivation draws a close connection to the original James-Stein formula, so that the motivation and recipe for the estimator can be intuited in a natural way.
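The original James-Stein formula referenced in the abstract can be sketched as follows. This is the classic shrinkage estimator for a multivariate normal mean, not the paper's principal-component estimator; the function name and the assumption of a known noise variance `sigma2` are illustrative choices, not taken from the paper.

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """Classic James-Stein estimate of theta from a single draw x ~ N(theta, sigma2 * I_p).

    Shrinks the observation toward the origin by a data-driven factor;
    it dominates the MLE (x itself) in squared-error risk when p > 2.
    """
    p = x.shape[0]
    if p <= 2:
        return x  # no domination result for p <= 2; return the MLE
    shrink = 1.0 - (p - 2) * sigma2 / np.dot(x, x)
    return shrink * x

# Usage: in high dimensions the estimate is pulled toward zero.
rng = np.random.default_rng(0)
x = rng.standard_normal(50)       # observation with theta = 0, sigma2 = 1
est = james_stein(x)
print(np.linalg.norm(est) < np.linalg.norm(x))
```

The paper's contribution is, in spirit, to apply the same shrinkage recipe to the leading sample eigenvector rather than to the sample mean.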


Similar resources

ℓp-norm based James-Stein estimation with minimaxity and sparsity

A new class of minimax Stein-type shrinkage estimators of a multivariate normal mean is studied, where the shrinkage factor is based on an ℓp norm. The proposed estimators allow some, but not all, coordinates to be estimated by 0, thereby allowing sparsity as well as minimaxity. AMS 2000 subject classifications: Primary 62C20; secondary 62J07.


Shrinkage to Smooth Non-convex Cone: Principal Component Analysis as Stein Estimation

In Kuriki and Takemura (1997a) we established a general theory of James-Stein type shrinkage to convex sets with smooth boundary. In this paper we show that our results can be generalized to the case where shrinkage is toward smooth non-convex cones. A primary example of this shrinkage is descriptive principal component analysis, where one shrinks small singular values of the data matrix. Here ...


Feature reduction of hyperspectral images: Discriminant analysis and the first principal component

When the number of training samples is limited, feature reduction plays an important role in classification of hyperspectral images. In this paper, we propose a supervised feature extraction method based on discriminant analysis (DA) which uses the first principal component (PC1) to weight the scatter matrices. The proposed method, called DA-PC1, copes with the small sample size problem and has...


James-Stein state filtering algorithms

In 1961, James and Stein discovered a remarkable estimator that dominates the maximum-likelihood estimate of the mean of a p-variate normal distribution, provided the dimension p is greater than two. This paper extends the James–Stein estimator and highlights benefits of applying these extensions to adaptive signal processing problems. The main contribution of this paper is the derivation of th...


Cluster-Seeking James-Stein Estimators

This paper considers the problem of estimating a high-dimensional vector of parameters θ ∈ Rⁿ from a noisy observation. The noise vector is i.i.d. Gaussian with known variance. For a squared-error loss function, the James-Stein (JS) estimator is known to dominate the simple maximum-likelihood (ML) estimator when the dimension n exceeds two. The JS-estimator shrinks the observed vector towards th...



Journal

Journal title: Social Science Research Network

Year: 2021

ISSN: 1556-5068

DOI: https://doi.org/10.2139/ssrn.3917693